Muller's Method

Muller's method is a root-finding algorithm, a numerical method for solving equations of the form ''f''(''x'') = 0. It was first presented by David E. Muller in 1956. Muller's method is based on the secant method, which constructs at every iteration a line through two points on the graph of ''f''. Instead, Muller's method uses three points, constructs the parabola through these three points, and takes the intersection of the ''x''-axis with the parabola to be the next approximation.


Recurrence relation

Muller's method is a recursive method which generates an approximation of the root ξ of ''f'' at each iteration. Starting with the three initial values ''x''0, ''x''−1 and ''x''−2, the first iteration calculates the first approximation ''x''1, the second iteration calculates the second approximation ''x''2, the third iteration calculates the third approximation ''x''3, etc. Hence the ''k''th iteration generates approximation ''x''''k''. Each iteration takes as input the last three generated approximations and the value of ''f'' at these approximations. Hence the ''k''th iteration takes as input the values ''x''''k''-1, ''x''''k''-2 and ''x''''k''-3 and the function values ''f''(''x''''k''-1), ''f''(''x''''k''-2) and ''f''(''x''''k''-3). The approximation ''x''''k'' is calculated as follows.

A parabola ''y''''k''(''x'') is constructed which goes through the three points (''x''''k''-1, ''f''(''x''''k''-1)), (''x''''k''-2, ''f''(''x''''k''-2)) and (''x''''k''-3, ''f''(''x''''k''-3)). When written in the Newton form, ''y''''k''(''x'') is

: y_k(x) = f(x_{k-1}) + (x - x_{k-1}) f[x_{k-1}, x_{k-2}] + (x - x_{k-1}) (x - x_{k-2}) f[x_{k-1}, x_{k-2}, x_{k-3}],

where ''f''[''x''''k''-1, ''x''''k''-2] and ''f''[''x''''k''-1, ''x''''k''-2, ''x''''k''-3] denote divided differences. This can be rewritten as

: y_k(x) = f(x_{k-1}) + w (x - x_{k-1}) + f[x_{k-1}, x_{k-2}, x_{k-3}] (x - x_{k-1})^2,

where

: w = f[x_{k-1}, x_{k-2}] + f[x_{k-1}, x_{k-3}] - f[x_{k-2}, x_{k-3}].

The next iterate ''x''''k'' is now given as the solution closest to ''x''''k''-1 of the quadratic equation ''y''''k''(''x'') = 0. This yields the recurrence relation

: x_k = x_{k-1} - \frac{2 f(x_{k-1})}{w \pm \sqrt{w^2 - 4 f(x_{k-1}) \, f[x_{k-1}, x_{k-2}, x_{k-3}]}}.

In this formula, the sign should be chosen such that the denominator is as large as possible in magnitude. We do not use the standard formula for solving quadratic equations because that may lead to loss of significance.

Note that ''x''''k'' can be complex, even if the previous iterates were all real. This is in contrast with other root-finding algorithms like the secant method, Sidi's generalized secant method or Newton's method, whose iterates will remain real if one starts with real numbers. Having complex iterates can be an advantage (if one is looking for complex roots) or a disadvantage (if it is known that all roots are real), depending on the problem.
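Written out in code, one update of this recurrence looks as follows. This is a minimal Python sketch, assuming a callable ''f'' and three distinct previous iterates; the name muller_step and the argument order are ours for illustration, and a complete implementation is given in the Computational example section below.

from cmath import sqrt  # complex square root: the discriminant may be negative

def muller_step(f, x_km3, x_km2, x_km1):
    """One Muller iteration: return x_k from x_{k-3}, x_{k-2}, x_{k-1}."""
    # First-order divided differences
    f_12 = (f(x_km1) - f(x_km2)) / (x_km1 - x_km2)  # f[x_{k-1}, x_{k-2}]
    f_13 = (f(x_km1) - f(x_km3)) / (x_km1 - x_km3)  # f[x_{k-1}, x_{k-3}]
    f_23 = (f(x_km2) - f(x_km3)) / (x_km2 - x_km3)  # f[x_{k-2}, x_{k-3}]
    # Second-order divided difference f[x_{k-1}, x_{k-2}, x_{k-3}]
    f_123 = (f_12 - f_23) / (x_km1 - x_km3)
    w = f_12 + f_13 - f_23
    s = sqrt(w ** 2 - 4 * f(x_km1) * f_123)
    # Choose the sign that maximises the magnitude of the denominator
    denom = w + s if abs(w + s) >= abs(w - s) else w - s
    return x_km1 - 2 * f(x_km1) / denom

For example, muller_step(lambda x: x ** 2 - 2, 0.0, 1.0, 2.0) returns the square root of 2 (up to rounding), because for a quadratic ''f'' the interpolating parabola coincides with ''f'' itself.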


Speed of convergence

The order of convergence of Muller's method is approximately 1.84. This can be compared with 1.62 for the secant method and 2 for Newton's method. So, the secant method makes less progress per iteration than Muller's method and Newton's method makes more progress.

More precisely, if ξ denotes a simple root of ''f'' (so ''f''(ξ) = 0 and ''f'''(ξ) ≠ 0), ''f'' is three times continuously differentiable, and the initial guesses ''x''0, ''x''1, and ''x''2 are taken sufficiently close to ξ, then the iterates satisfy

: \lim_{k \to \infty} \frac{\left| x_k - \xi \right|}{\left| x_{k-1} - \xi \right|^{\mu}} = \left| \frac{f'''(\xi)}{6 f'(\xi)} \right|^{(\mu - 1)/2},

where μ ≈ 1.84 is the positive solution of x^3 - x^2 - x - 1 = 0.
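The constant μ ≈ 1.84 can be checked numerically. The short Python sketch below (a plain bisection on the interval [1, 2]; the function name is ours for illustration, not part of the original article) computes the positive root of x^3 - x^2 - x - 1 = 0:

def convergence_order(tol=1e-12):
    """Positive root of x^3 - x^2 - x - 1 = 0, found by bisection on [1, 2]."""
    p = lambda x: x ** 3 - x ** 2 - x - 1
    lo, hi = 1.0, 2.0  # p(1) = -2 < 0 and p(2) = 1 > 0, so the root lies in between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(convergence_order())  # about 1.8393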


Generalizations and related methods

Muller's method fits a parabola, i.e. a second-order polynomial, through the last three obtained points (''x''''k''-1, ''f''(''x''''k''-1)), (''x''''k''-2, ''f''(''x''''k''-2)) and (''x''''k''-3, ''f''(''x''''k''-3)) in each iteration. One can generalize this and fit a polynomial ''p''''k,m''(''x'') of degree ''m'' to the last ''m''+1 points in the ''k''th iteration. Our parabola ''y''''k'' is written as ''p''''k'',2 in this notation. The degree ''m'' must be 1 or larger. The next approximation ''x''''k'' is then one of the roots of ''p''''k,m'', i.e. one of the solutions of ''p''''k,m''(''x'') = 0. Taking ''m'' = 1 we obtain the secant method, whereas ''m'' = 2 gives Muller's method.

Muller calculated that the sequence generated this way converges to the root ξ with an order μ''m'', where μ''m'' is the positive solution of x^{m+1} - x^m - x^{m-1} - \dots - x - 1 = 0. The method is much more difficult for ''m'' > 2 than it is for ''m'' = 1 or ''m'' = 2, because it is much harder to determine the roots of a polynomial of degree 3 or higher. Another problem is that there seems to be no prescription for which of the roots of ''p''''k,m'' to pick as the next approximation ''x''''k'' for ''m'' > 2. These difficulties are overcome by Sidi's generalized secant method, which also employs the polynomial ''p''''k,m''. Instead of trying to solve ''p''''k,m''(''x'') = 0, this method calculates the next approximation ''x''''k'' with the aid of the derivative of ''p''''k,m'' at ''x''''k''-1, as sketched below.
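As an illustration of this last point (a sketch of our own, not code taken from Sidi's papers), note that for ''m'' = 2 the derivative of the interpolating parabola at ''x''''k''-1 equals the quantity ''w'' defined in the recurrence-relation section, so one such step can be written as a Newton-like update that uses ''w'' in place of the true derivative:

def sidi_like_step_m2(f, x_km3, x_km2, x_km1):
    """One Newton-like step using the derivative of the interpolating parabola.

    For m = 2 that derivative at x_{k-1} is
    w = f[x_{k-1}, x_{k-2}] + f[x_{k-1}, x_{k-3}] - f[x_{k-2}, x_{k-3}].
    """
    f_12 = (f(x_km1) - f(x_km2)) / (x_km1 - x_km2)  # f[x_{k-1}, x_{k-2}]
    f_13 = (f(x_km1) - f(x_km3)) / (x_km1 - x_km3)  # f[x_{k-1}, x_{k-3}]
    f_23 = (f(x_km2) - f(x_km3)) / (x_km2 - x_km3)  # f[x_{k-2}, x_{k-3}]
    w = f_12 + f_13 - f_23  # derivative of the parabola at x_{k-1}
    return x_km1 - f(x_km1) / w

Unlike the Muller update, this step involves no square root, so real starting values always produce real iterates.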


Computational example

Below, Muller's method is implemented in the Python programming language. It is then applied to find a root of the function ''f''(''x'') = ''x''^2 − 612.

from typing import Callable, Sequence, Tuple, Union
from cmath import sqrt  # Use the complex sqrt as we may generate complex numbers

Num = Union[float, complex]
Func = Callable[[Num], Num]


def div_diff(f: Func, xs: Sequence[Num]) -> Num:
    """Calculate the divided difference f[x0, x1, ...]."""
    if len(xs) == 2:
        a, b = xs
        return (f(a) - f(b)) / (a - b)
    else:
        return (div_diff(f, xs[1:]) - div_diff(f, xs[:-1])) / (xs[-1] - xs[0])


def mullers_method(f: Func, xs: Tuple[Num, Num, Num], iterations: int) -> Num:
    """Return the root calculated using Muller's method."""
    x0, x1, x2 = xs
    for _ in range(iterations):
        w = div_diff(f, (x2, x1)) + div_diff(f, (x2, x0)) - div_diff(f, (x1, x0))
        s_delta = sqrt(w ** 2 - 4 * f(x2) * div_diff(f, (x2, x1, x0)))
        denoms = [w + s_delta, w - s_delta]
        # Take the higher-magnitude denominator
        x3 = x2 - 2 * f(x2) / max(denoms, key=abs)
        if x3 == x2 or x3 == x1:
            # The new iterate repeats an earlier point: we have converged to
            # working precision, and another pass would divide by zero in div_diff.
            break
        # Advance
        x0, x1, x2 = x1, x2, x3
    return x3


def f_example(x: Num) -> Num:
    """The example function. With a more expensive function, memoization of the last 4 points called may be useful."""
    return x ** 2 - 612


root = mullers_method(f_example, (10, 20, 30), 5)
print("Root: {}".format(root))  # Root: approximately (24.7386337537+0j), i.e. sqrt(612)
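Because the implementation above uses the complex square root, it can also locate complex roots starting from purely real values. For instance (the polynomial and the starting points here are our own illustration, not part of the original example):

complex_root = mullers_method(lambda x: x ** 2 + 1, (0, 1, 2), 5)
print(complex_root)  # 1j: the iterates leave the real axis even though the start is real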


References

* Muller, David E. (1956). "A Method for Solving Algebraic Equations Using an Automatic Computer". ''Mathematical Tables and Other Aids to Computation'', 10, 208–215.
* Atkinson, Kendall E. (1989). ''An Introduction to Numerical Analysis'', 2nd edition, Section 2.4. John Wiley & Sons, New York.
* Burden, R. L. and Faires, J. D. ''Numerical Analysis'', 4th edition, pages 77ff.


Further reading

* A bracketing variant with global convergence: